
of each respective force field can be embodied in a simple single spring constant, so a typical total empirical potential energy in practice might be

$$U_{\mathrm{total}} = \sum_{\mathrm{bonds}} k_r \left(r - r_{\mathrm{eq}}\right)^2 + \sum_{\mathrm{angles}} k_\theta \left(\theta - \theta_{\mathrm{eq}}\right)^2 + \sum_{\mathrm{dihedrals}} \frac{V_n}{2}\left[1 + \cos\left(n\phi - \phi_{\mathrm{eq}}\right)\right] + \sum_{i<j} \left[\frac{A_{ij}}{r_{ij}^{12}} - \frac{B_{ij}}{r_{ij}^{6}} + \frac{q_i q_j}{4\pi\varepsilon_0 \varepsilon_r r_{ij}}\right] \tag{8.14}$$

where
k_r and k_θ are consensus mean stiffness parameters relating to bond strengths and angles, respectively
r, θ, and φ are bond lengths, angles, and dihedral angles
V_n is the dihedral energy barrier of order n, where n is a positive integer (typically 1 or 2)
r_eq, θ_eq, and φ_eq are the equilibrium values, determined from either QM simulation or separate biophysical experimentation

For example, X-ray crystallography (Chapter 5) might generate estimates for r_eq, whereas k_r can be estimated from techniques such as Raman spectroscopy (Chapter 4) or IR absorption (Chapter 3).
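
To make the bookkeeping in Equation 8.14 concrete, the following is a minimal Python sketch that evaluates each term of the sum for a toy system. The function name, the tuple layouts, and the reduced-unit defaults are illustrative assumptions, not part of any real force-field package; each input tuple simply mirrors one term of the equation above.

```python
import math

def total_potential(bonds, angles, dihedrals, pairs, eps0=1.0, eps_r=1.0):
    """Evaluate Equation 8.14 term by term (a toy sketch, not a real force field).

    bonds:     list of (k_r, r, r_eq) tuples
    angles:    list of (k_theta, theta, theta_eq) tuples
    dihedrals: list of (V_n, n, phi, phi_eq) tuples
    pairs:     list of (A_ij, B_ij, q_i, q_j, r_ij) for each unique pair i < j
    Reduced units are assumed; substitute SI values of eps0 for SI inputs.
    """
    u = 0.0
    # Harmonic bond-stretch terms: k_r (r - r_eq)^2
    u += sum(k * (r - req) ** 2 for k, r, req in bonds)
    # Harmonic angle-bend terms: k_theta (theta - theta_eq)^2
    u += sum(k * (th - theq) ** 2 for k, th, theq in angles)
    # Periodic dihedral terms: (V_n / 2)[1 + cos(n*phi - phi_eq)]
    u += sum(0.5 * vn * (1.0 + math.cos(n * phi - phieq))
             for vn, n, phi, phieq in dihedrals)
    # Nonbonded terms: Lennard-Jones plus Coulomb, over unique pairs i < j
    for a, b, qi, qj, rij in pairs:
        u += a / rij**12 - b / rij**6 + qi * qj / (4.0 * math.pi * eps0 * eps_r * rij)
    return u
```

In a real simulation the bond, angle, and dihedral lists come from the molecular topology, while the pair list is regenerated every few steps from the atomic coordinates.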

In a simulation, summing over all unique pairwise interaction potentials for n atoms means that the number of force, velocity, position, etc., calculations required scales as ~O(n²). That is to say, if the number of atoms simulated in system II is twice that in system I, then the computation time taken for each Δt simulation step will be greater by a factor of ~4. Some computational improvement can be made by truncation, that is, by applying a cutoff for nonbonded force calculations such that only atoms within a certain distance of each other are assumed to interact. Doing this reduces the computational scaling to ~O(n), with the caveat of introducing an additional error into the computed force field. In many routine simulations, however, this is an acceptable compromise, provided the cutoff does not imply the creation of separate ions.
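
The effect of such a cutoff can be sketched in a few lines; the generator below is a hypothetical illustration, and note that a naive double loop still *visits* all ~O(n²) pairs, so practical codes combine the cutoff with cell or neighbor lists (omitted here for brevity) to reach ~O(n).

```python
import itertools

def nonbonded_pairs(positions, cutoff=None):
    """Yield unique atom pairs (i < j); with a cutoff, only nearby pairs survive.

    positions: list of (x, y, z) coordinates. Without a cutoff this yields
    n(n-1)/2 pairs, i.e. ~O(n^2) interactions to evaluate.
    """
    for i, j in itertools.combinations(range(len(positions)), 2):
        if cutoff is not None:
            dx, dy, dz = (positions[i][k] - positions[j][k] for k in range(3))
            if dx * dx + dy * dy + dz * dz > cutoff * cutoff:
                continue  # beyond the cutoff: assume no interaction
        yield i, j
```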

In principle, though, the nonbonding force field derives from a many-body potential energy function: it is not simply the sum of single pairwise interactions for each atom in the system, as embodied by the simple Lennard–Jones potential, but also includes the effects of more than two interacting atoms, beyond the single interaction between a given atom and its nearest neighbor. For example, to consider all the interactions between the nearest and the next-nearest neighbor of each atom in the system, the computation time required would scale as ~O(n³). More complex computational techniques to account for these higher-order interactions include the particle mesh Ewald (PME, or just PM) summation method (see Chapter 2), which can reduce the number of computations required by discretizing the atoms onto a grid, resulting in a computation scaling factor of ~O(nα), where 1 < α < 2, and its Particle–Particle–Particle–Mesh variant (PPPM or P3M).
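
The combinatorial origin of these scaling factors is easy to verify directly: the number of unique pairs grows as n(n − 1)/2 ~ O(n²) and the number of unique triplets as ~O(n³). A quick check, with an atom count chosen purely for illustration:

```python
from math import comb

n = 1000  # hypothetical number of atoms
print(comb(n, 2))  # unique pairwise interactions: 499,500      ~O(n^2)
print(comb(n, 3))  # unique triplet interactions: 166,167,000   ~O(n^3)
```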

PM is a Fourier-based Ewald summation technique optimized for determining the potentials in many-particle simulations. Particles (atoms, in the case of MD) are first interpolated onto a grid/mesh; in other words, each particle is discretized to the nearest grid point. The potential energy is then determined for each grid point, as opposed to the original positions of the particles. Obviously, this interpolation introduces errors into the calculation of the force field, depending on how fine a mesh is used, but since the mesh has regular spatial periodicity, it is ideal for Fourier transform (FT) analysis: the FT of the total potential is a weighted sum of the FTs of the potentials at each grid point, and so the inverse FT of this weighted sum generates the total potential. Direct calculation of each discrete FT would still yield an ~O(m²) computational scaling factor for calculating pairwise-dependent potentials, where m is the number of grid points on the mesh; but for sensible sampling m scales as ~O(n), and so the overall scaling factor is ~O(n²). However, if the fast Fourier transform (FFT) is used, then the number of calculations required for an n-size dataset is ~n logₑ n, so the computation scaling factor reduces to ~O(n logₑ n).
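
The core of the particle–mesh idea, charge assignment to a grid followed by a Fourier-space solve, can be sketched with NumPy. This is a deliberately simplified toy, not PME itself: it uses nearest-grid-point assignment and a bare Poisson kernel, where a real PME code uses smoother B-spline interpolation and an Ewald-split kernel, and all names and grid sizes here are illustrative assumptions.

```python
import numpy as np

def mesh_potential(charges, coords, n_grid=64, box=1.0):
    """Toy particle-mesh sketch: nearest-grid-point assignment + FFT Poisson solve."""
    # 1. Interpolate each particle onto the nearest grid point (charge assignment)
    rho = np.zeros((n_grid,) * 3)
    idx = (np.asarray(coords) / box * n_grid).round().astype(int) % n_grid
    for (i, j, k), q in zip(idx, charges):
        rho[i, j, k] += q
    # 2. Solve Poisson's equation in Fourier space: phi_k ∝ rho_k / k^2
    #    (physical prefactors omitted in this sketch)
    k = 2.0 * np.pi * np.fft.fftfreq(n_grid, d=box / n_grid)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                # avoid division by zero at k = 0
    phi_k = np.fft.fftn(rho) / k2
    phi_k[0, 0, 0] = 0.0             # drop the (neutralizing) mean mode
    # 3. Inverse FFT returns the potential at every grid point
    return np.fft.ifftn(phi_k).real
```

The two FFT calls dominate the cost, which is where the ~O(n logₑ n) scaling quoted above comes from once the mesh size is chosen so that m ~ O(n).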